AI Glasses Move Closer to AR as MWC 2026 Sparks Race

Smart glasses were a major highlight at the Mobile World Congress (MWC) 2026 in Barcelona, where several tech companies showcased new devices powered by artificial intelligence.

At MWC 2026, major tech companies such as Meta, Alibaba, Xiaomi, MediaTek, TCL, Google, and Samsung emphasised AI-enabled smart glasses, prototypes, and upcoming launches.

The event highlighted two trends: AI glasses are gradually incorporating augmented reality, and firms are escalating competition over which platforms and devices will succeed the smartphone.

Many in the industry see AI glasses as a candidate to become the next major category of mobile device. Their development could revive the rivalry between Apple and Google’s Android, while Meta, which helped popularise AI glasses, remains active in the segment. The emerging market is widely viewed as a contest between several big tech companies.

Recent advances in multimodal AI, which can process text, images, and audio together, have intensified big tech interest in AI glasses. A January report from Smart Analytics Global predicted that the global AI glasses market may grow rapidly in 2026.

The report said shipments could rise from 6 million units in 2025 to 20 million in 2026, with market value growing from 1.2 billion to 5.6 billion dollars over the same period. These projections are based on strong sales of Ray-Ban Meta smart glasses and possible market entries by Apple and Samsung.

MWC 2026 ended on 5 March in Barcelona and featured many AI glasses demos, including Google’s prototype. Google highlighted Gemini AI-powered smart assistants, glasses with AR/XR features, AI-driven networks, and an AI system integrated with Android. News reports noted strong attention for the AI glasses demos at the event.

Google’s prototype runs on Android XR, an operating system designed for AR devices. Current AI glasses rely on different AI models: Ray-Ban Meta glasses use Meta’s own assistant, while Alibaba’s glasses draw on its Qwen models. The use of Android XR hints that AI glasses could evolve into full AR devices by 2027.

A key distinction is that current AI glasses primarily deliver information through audio, analysing sights and sounds with cameras and microphones and providing outputs such as transcriptions or answers. They do not visually overlay information onto what the user sees, distinguishing them from AR glasses.

In contrast, augmented reality glasses display digital content such as text, images, or instructions overlaid directly onto real-world objects, letting users take in digital information both visually and audibly within their environment.

For AI glasses, artificial intelligence mostly interprets what the user says and sees, usually delivering responses by audio. AR glasses, however, add a visual layer, showing digital data anchored within the user’s view of the environment. This enables a wider range of uses, since people receive spatially aligned visual information rather than audio alone.

Some AI glasses offer a limited heads-up display (HUD), but unlike full AR, their simple text or icons remain fixed on the lens and do not interact with real-world objects.

The main difference is where information is placed. In augmented reality, digital information is tied to real objects. For example, if someone looks at a sign in another language and asks for a translation, HUD glasses show the translation at a fixed spot on the display, and it stays there even when the user turns their head.

With AR glasses, by contrast, the translated text stays attached to the sign: it disappears when the sign leaves the field of view and reappears when the sign becomes visible again. The information is linked to real-world locations, not just to wherever the user is looking.

To achieve AR effects, AI glasses process camera input to map surroundings, measure depth, and overlay 3D images in real time.
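
To make the placement idea concrete, the short Python sketch below (an illustration written for this article, not code from any product mentioned here) projects a label anchored at a fixed world position, such as a translated sign, into screen coordinates for two head poses. The anchored label moves across the display as the head turns, staying over the sign, whereas a HUD label would simply be drawn at the same pixel every frame. All function names and numbers are hypothetical.

```python
# Illustrative sketch of world-anchored rendering; values are hypothetical.
import numpy as np

def project_to_screen(point_world, head_pose, focal_px=600, width=1280, height=720):
    """Project a 3D world point into 2D screen pixels for the current head pose.

    head_pose holds a 3x3 rotation matrix 'R' (world -> head frame) and a
    translation vector 't' (head position in world coordinates).
    """
    # Transform the anchored point from world coordinates into the head/camera frame.
    p_cam = head_pose["R"] @ (np.asarray(point_world) - head_pose["t"])
    if p_cam[2] <= 0:          # Behind the wearer: the anchored label is not drawn.
        return None
    # Simple pinhole projection onto the display.
    u = width / 2 + focal_px * p_cam[0] / p_cam[2]
    v = height / 2 + focal_px * p_cam[1] / p_cam[2]
    return (u, v)

# A translated sign anchored 2 m in front of the wearer's starting position.
sign_anchor = [0.0, 0.0, 2.0]

# Head pose 1: looking straight at the sign.
pose_a = {"R": np.eye(3), "t": np.zeros(3)}

# Head pose 2: head turned about 20 degrees to the right.
theta = np.radians(20)
pose_b = {"R": np.array([[np.cos(theta), 0, -np.sin(theta)],
                         [0, 1, 0],
                         [np.sin(theta), 0, np.cos(theta)]]),
          "t": np.zeros(3)}

print(project_to_screen(sign_anchor, pose_a))  # label near the screen centre
print(project_to_screen(sign_anchor, pose_b))  # label shifts left, staying on the sign
# A HUD label, by contrast, would be drawn at the same fixed (u, v) every frame.
```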

HUD-based devices are blurring the line between AI and AR glasses, offering features such as real-time translation, navigation guidance, and AI-powered smart responses. Smart Analytics Global forecasts that HUD-type AI glasses may surpass voice-only models in popularity from 2028 onward, as visual AI capabilities advance and hardware evolves.

Real augmented reality glasses need to understand the physical world and place virtual objects in it. This requires technologies like area mapping, depth measurement, movement tracking, eye and gesture recognition, and 3D image creation.

SLAM (Simultaneous Localisation and Mapping) helps devices determine location and map surroundings by detecting objects and creating 3D points.
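
As a rough illustration of that localise-and-map loop, the sketch below strips SLAM down to its two steps: use known map points to estimate where the device is, then use that estimate to add newly seen points to the map. It assumes the device's orientation is already known and ignores measurement noise, so it is a conceptual toy rather than how any shipping system works; the names and values are hypothetical.

```python
# Toy localise-and-map loop; not a real SLAM system.
import numpy as np

def estimate_pose(map_points, observations):
    """Localisation step: given known map points and the offsets at which the
    device currently sees them (in its own frame), estimate device position."""
    estimates = [map_points[pid] - offset for pid, offset in observations.items()]
    return np.mean(estimates, axis=0)

def add_new_points(map_points, pose, new_observations):
    """Mapping step: turn newly observed offsets into world-frame map points."""
    for pid, offset in new_observations.items():
        map_points[pid] = pose + offset
    return map_points

# Two landmarks already in the map (e.g. corners detected in earlier frames).
world_map = {"corner_a": np.array([1.0, 0.0, 2.0]),
             "corner_b": np.array([-1.0, 0.0, 2.5])}

# Offsets at which the device sees those landmarks this frame.
seen = {"corner_a": np.array([0.7, 0.0, 1.9]),
        "corner_b": np.array([-1.3, 0.0, 2.4])}

pose = estimate_pose(world_map, seen)              # localisation
world_map = add_new_points(world_map, pose,        # mapping of a newly seen point
                           {"poster": np.array([0.5, 0.3, 1.0])})
print(pose, world_map["poster"])
```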

Depth sensors help map the surroundings and measure distances; if the mapping is inaccurate, digital objects drift across the display instead of staying in place. Eye-tracking keeps digital information aligned with the user’s gaze.

Gesture recognition lets users control devices or screens without physical controllers. Some wearables can use wristbands to detect muscle signals for commands.

These tasks demand high computing power, and to avoid latency, processing is done mainly on the device itself.

Qualcomm’s Snapdragon XR chips are common in extended and augmented reality devices and feature in projects with companies including Google, Samsung, and Xreal. At MWC 2026, MediaTek unveiled AI glasses powered by its Dimensity 9500 chip.

MediaTek did not say it would make its own AI glasses, but the prototype demonstrated that its chips could serve as an alternative to Qualcomm’s. The Dimensity 9500, released last year, includes a dedicated AI unit that runs AI tasks on the device without relying on external servers. It can handle text, images, voice, and video simultaneously, enabling features such as smart responses and real-time translation.

Despite advancements, hardware challenges persist. AR glasses must accommodate cameras, 3D sensors, batteries, and cooling in a lightweight frame. Mapping and processing 3D imagery demand considerable energy and produce heat, so processors must remain highly efficient.

Display technology is also a challenge. AR glasses must display virtual objects clearly while still letting users see the real world. Many designs use special optics that send images from tiny projectors through the lens to the user’s eye, often with very bright microLED or microOLED screens.

Field of view remains a major technical challenge. It describes how wide virtual images appear in AR glasses, and wider angles improve realism. Experts estimate that at least 70 degrees may be necessary for AR to feel convincing.

At 30 degrees, virtual images appear as a small floating window and vanish as soon as the user turns their head. Although 70 degrees is only a little more than twice 30 degrees, it covers roughly five times the viewing area, making virtual screens and 3D objects look far more natural. Expanding the field of view, however, requires larger displays, more complex waveguide systems, and thicker lenses. These changes add weight, reduce outdoor brightness, and raise power consumption.
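
The "five times the area" figure can be sanity-checked with a short calculation. The snippet below models the field of view as a circular cone and compares the solid angles covered by 70-degree and 30-degree cones; real displays are rectangular, so the exact ratio depends on the aspect ratio, but the result lands close to the article's figure.

```python
# Rough check of the area claim using cone solid angles; illustrative only.
import math

def cone_solid_angle(fov_degrees):
    """Solid angle (in steradians) of a cone with the given full apex angle."""
    half_angle = math.radians(fov_degrees / 2)
    return 2 * math.pi * (1 - math.cos(half_angle))

ratio = cone_solid_angle(70) / cone_solid_angle(30)
print(round(cone_solid_angle(30), 3), round(cone_solid_angle(70), 3), round(ratio, 1))
# Prints roughly 0.214 sr, 1.136 sr and a ratio near 5.3, i.e. about five times.
```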

HUD-style AI glasses generally provide a field of view of between 10 and 20 degrees, and TCL’s devices typically fall within this range. RayNeo models reach up to 30 degrees, while the Microsoft HoloLens 2, discontinued in late 2024, delivered about 52 degrees. Video-oriented AR glasses such as Viture’s Luma Pro and Xreal’s One Pro offer 50 to 60 degrees, as they are designed primarily for indoor use.

The Xreal One Pro, which uses MediaTek’s Dimensity 9500 chip, runs independently with built-in computing and its own battery. This lets it run AI and create 3D images directly on the device.

Developers still face major obstacles in achieving a 70-degree field of view with lightweight glasses. Enhanced processors or external computing resources can increase speed and reduce heat, but the display components must still fit within the glasses.

Companies such as Meta, Apple, and Google are developing AR glasses, though releases are not expected before 2027. Google’s Android XR system will eventually support AR glasses. Samsung remains focused on XR headsets until technical challenges are resolved.
